Conversation
…architecture support
…gging and installation
I do want to slim this down further, but not so much that it becomes harder to use. The rough plan was to install docker tooling inside this container and share the …
…and updating image descriptions
…ilding and pushing the image
this is the code I have been working on locally to create a display container that is **separate** and **not ros enabled**. The idea with it not being ros enabled and running on debian:trixie-slim is to keep the size of this display container down, and reduce complexity.
@marc-hanheide do we want to rebase this onto the …
I'm not convinced by making an assumption of access to the docker socket. That's a deployment question that shouldn't have an impact on the container. We can use devcontainer labels to facilitate attaching to it.
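To illustrate the devcontainer-label idea: the Dev Containers spec defines a `devcontainer.metadata` image label whose JSON payload editors can read when attaching to a running container, so no docker-socket assumption is baked into the image. The property value below is a made-up example, not this repo's configuration.

```dockerfile
# Illustrative only: advertise attach-time metadata on the image itself.
# "remoteUser" is an example property; real values would depend on the image.
LABEL devcontainer.metadata='[{"remoteUser": "lcas"}]'
```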
yes, feel free to merge #1 if happy with it and bring it all together
then my … with the …
|
I understand that exposing the Docker socket is something we cannot rely on to access other containers' terminals. It was just an idea for how to let the user spawn items from other containers inside the web view. I am now reconsidering the purpose of accessing anything other than a running display inside this container; to put it simply, why/when does a user/developer need to launch any apps inside the container?

I am a little concerned about implementing the VNC container on top of a ROS-enabled base with a minimal ROS environment. If the display container is only being used as an X11 destination, this will make the container larger than it needs to be (and pull in more deps). Unless what you're considering is to have this as an alternative base for development, allowing for a similar experience to what is inside LCAS/docker_cuda_desktop. From my perspective that adds a lot more complexity when it comes to real-world deployment, as you're developing in a completely different environment from deployment. Should the developer be running the …

I think it would be best to work through to an MVP, so my concept can be fully demoed. Then we can go from there and decide which is the best way forward.
OK, I think I understand (maybe). You want "just" an X server that other containers can connect to (via the normal X protocol?), so they may set … That is very different to what I had in mind, indeed. To clarify what I had in mind:
Then whenever we build bespoke images, they are built on top of either of these flavours. E.g., I would suggest a dedicated "rviz"/rqt image, built on top of the CUDA-enabled VNC one. This can then be used in many instances to introspect another container. But, by all means, complete an MVP as I'm still not sure I fully get it.
Any update @cooperj?
Hey @marc-hanheide, this is ready for some more oversight. I have a working demo for you to look at: vnc-demo.mp4

If you want to replicate this, this is the structure you need:

File Structure
compose.yaml

```yaml
services:
  vnc:
    build:
      context: ./aoc_container_base
      dockerfile: vnc.dockerfile
    user: "lcas"
    ports:
      - "5801:5801"
    volumes:
      - x11:/tmp/.X11-unix
      - /var/run/docker.sock:/var/run/docker.sock
    devices:
      - /dev/dri:/dev/dri # GPU access for VirtualGL
    shm_size: '2gb'
    stdin_open: true
    tty: true
    networks:
      - aoc_ros
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, compute, utility, graphics]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all

  simulator:
    build:
      context: ./aoc_robot_simulator
      dockerfile: .devcontainer/Dockerfile
    user: "ros"
    runtime: nvidia
    volumes:
      - x11:/tmp/.X11-unix
    depends_on:
      - vnc
    stdin_open: true
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, compute, utility, graphics]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
      - DISPLAY=:1.0
      - ROS_DOMAIN_ID=1
      - ROBOT_MODEL=hunter
      - ROBOT_NAME=hunter_001
      - USE_SIM=true
      - WORLD=person_walking
    networks:
      - aoc_ros

  hri:
    build:
      context: ./aoc_hri
      dockerfile: .devcontainer/Dockerfile
    user: "ros"
    runtime: nvidia
    stdin_open: true
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, compute, utility, graphics]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
      - DISPLAY=:1.0
      - USE_SIM=true
    volumes:
      - x11:/tmp/.X11-unix
    depends_on:
      - vnc
      - simulator
    networks:
      - aoc_ros

volumes:
  x11:

networks:
  aoc_ros:
```

Questions to answer:
I propose all ROS work should happen in a separate container, and, if required, all debugging tasks should be run in a separate container that is connected to both the network (so it can get ROS topics and data) and the X11 socket. Would this debugging container take the role of being the terminals? Do we provide it with code-server?
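To make the separate-debugging-container idea concrete, here is a hedged compose sketch. The `x11` volume and `aoc_ros` network come from the compose file above; the service name `debug` and its image are placeholders (it would need a ROS-enabled image to actually see topics), not anything this PR defines.

```yaml
services:
  # Hypothetical debugging service; the image name is a placeholder for any
  # ROS-enabled image carrying the tools you want (rviz2, rqt, ros2 CLI, ...).
  debug:
    image: some-ros-debug-image:latest   # placeholder, not a real image
    user: "ros"
    environment:
      - DISPLAY=:1.0          # render into the vnc service's X server
    volumes:
      - x11:/tmp/.X11-unix    # shared X11 socket from the compose file above
    networks:
      - aoc_ros               # same network, so ROS topics/data are reachable
    depends_on:
      - vnc
    stdin_open: true
    tty: true
```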
This allows for extra flexibility, letting consumers pick and choose which tools they need. It also allows us to use this exact same image to build other containers using the same format, and chop and change bits easily.
Pull request overview
Adds a new lightweight, non-ROS VNC/noVNC “display” container (Debian trixie-slim based) plus a devtools variant, and wires both into the existing multi-image GitHub Actions build pipeline.
Changes:
- Introduce `vnc` image (TurboVNC + XFCE + noVNC) with a runtime entrypoint and wallpapers.
- Add `vnc_devtools` image layer with Docker CLI tooling for interacting with sibling containers.
- Update CI workflows: split ROS builds into a dedicated reusable workflow and add build/push jobs for the VNC images.
Reviewed changes
Copilot reviewed 7 out of 11 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| vnc.dockerfile | Defines the Debian-based VNC/noVNC desktop image (TurboVNC, XFCE, noVNC). |
| vnc_devtools.dockerfile | Extends the VNC base image with Docker CLI and extra tooling. |
| docker/vnc-entrypoint.sh | Starts VNC server, XFCE session, and noVNC proxy; applies xhost and wallpaper. |
| docker/wallpapers/aoc.jpg | Adds wallpaper asset used by the VNC container. |
| docker/wallpapers/lcas.jpg | Adds wallpaper asset used by the VNC container. |
| .github/workflows/docker-build-and-push.yaml | Adds vnc/vnc-devtools build jobs; switches ROS jobs to ROS-specific reusable workflow. |
| .github/workflows/_build-ros-image.yaml | New reusable workflow for ROS images with ROS-distro-specific tags/caching. |
| .github/workflows/_build-image.yaml | Simplifies non-ROS image workflow inputs/tags/caching. |
| README.md | Documents the new lcas.lincoln.ac.uk/vnc image. |
```shell
screen -dmS turbovnc bash -c '/opt/TurboVNC/bin/vncserver :1 -depth 24 -noxstartup -securitytypes TLSNone,X509None,None 2>&1 | tee /tmp/vnc.log; read -p "Press any key to continue..."'
```

```shell
DISPLAY=:1 xhost +local: 2>/dev/null
echo "xhost +local: applied to :1"
echo "xfce4 up"
```

```shell
echo "starting novnc ${NOVNC_VERSION}"
screen -dmS novnc bash -c '/usr/local/novnc/noVNC-${NOVNC_VERSION}/utils/novnc_proxy --vnc localhost:5901 --listen 5801 2>&1 | tee /tmp/novnc.log'
```

```shell
groupadd -g $DOCKER_GID docker && \
    usermod -aG docker ${username}
```
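Since the docker socket shouldn't be assumed at build time, one alternative to the build-time `groupadd -g $DOCKER_GID` above is to resolve the GID from the mounted socket at container start. This is only a sketch under assumptions: the socket path is the usual default, and the `999` fallback is an arbitrary placeholder.

```shell
# Sketch: pick up the docker group GID from the mounted socket at container
# start, rather than baking DOCKER_GID in at build time.
SOCK=/var/run/docker.sock
DEFAULT_GID=999

if [ -S "$SOCK" ]; then
    # Match the host's docker group so the container user can talk to the daemon.
    DOCKER_GID="$(stat -c %g "$SOCK")"
else
    # No socket mounted (e.g. X11-only deployments): use a placeholder GID.
    DOCKER_GID="$DEFAULT_GID"
fi

echo "docker group GID resolved to ${DOCKER_GID}"
# In the image this would then feed: groupadd -g "$DOCKER_GID" docker && usermod -aG docker "$username"
```

This keeps the image deployment-agnostic: nothing breaks when the socket is simply not mounted.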
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
* Initial plan
* Remove unused VNC_PORT env var from vnc.dockerfile

Co-authored-by: cooperj <28831674+cooperj@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: cooperj <28831674+cooperj@users.noreply.github.com>
Regarding authentication and listen addresses, I did spend some time previously trying to get this working. The way I read Copilot's suggestion, it gives the dev the option of setting a password on the noVNC interface, disabled by default. What I think we should do is the following options: …

I feel like we should go in order of preference 2 → 1 → 3, with options 1 and 2 being the easiest to implement.
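An off-by-default password could be sketched in docker/vnc-entrypoint.sh roughly as below. This is hedged: `VNC_PASSWORD` is a variable name we would introduce, and it assumes TurboVNC's `vncpasswd` accepts `-f` to obfuscate a password from stdin (as TigerVNC's does) and supports the `TLSVnc`/`VncAuth` security types; with `VNC_PASSWORD` unset, behaviour is identical to the current no-auth default.

```shell
# Sketch of opt-in password auth for the entrypoint (disabled by default).
SECURITY_TYPES="TLSNone,X509None,None"   # current default: no authentication

if [ -n "${VNC_PASSWORD:-}" ] && [ -x /opt/TurboVNC/bin/vncpasswd ]; then
    # Operator opted in: store an obfuscated password file and switch security types.
    mkdir -p "${HOME}/.vnc"
    printf '%s\n' "${VNC_PASSWORD}" | /opt/TurboVNC/bin/vncpasswd -f > "${HOME}/.vnc/passwd"
    chmod 600 "${HOME}/.vnc/passwd"
    SECURITY_TYPES="TLSVnc,VncAuth"
fi

echo "vncserver will be started with -securitytypes ${SECURITY_TYPES}"
```

The resulting `SECURITY_TYPES` value would then replace the hard-coded `-securitytypes` list in the existing `vncserver` invocation.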
This code is to be merged into the PR #1